In a blog post titled Reflections, Sam Altman looks back at what OpenAI has done and then claims that they know how to build AGI:
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
It is worth noting that the definition of AGI (Artificial General Intelligence) is sufficiently vague that meeting this target could become a matter of semantics. Nonetheless, here are some definitions of AGI, from OpenAI itself or from others writing about OpenAI:
- “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” – Note the phrase “economically valuable work”. I wonder if philosophizing or making art counts as valuable? Is intelligence being limited here to economics?
- “AI systems that are generally smarter than humans” – This is somewhat circular, as it brings us back to defining “smartness”, another word for “intelligence”.
- “any system that can outperform humans at most tasks” – This could be tied to the quote above and the idea of AI agents working for companies and outperforming humans. It seems to me we are nowhere near this if you include physical tasks.
- “an AI system that can generate at least $100 billion in profits” – This is the definition used by OpenAI and Microsoft to determine when OpenAI no longer has to share technology with Microsoft.